
    Final Iterations in Interior Point Models -- Preconditioned Conjugate Gradients and Modified Search Directions

    In this article we consider modified search directions in the endgame of interior point methods for linear programming. In this stage, the normal equations determining the search directions become ill-conditioned. The modified search directions are computed by solving perturbed systems that can be solved efficiently by the preconditioned conjugate gradient solver. We prove the convergence of interior point methods using the modified search directions and show that each barrier problem is solved with a superlinear convergence rate. A variation of Cholesky factorization is presented for computing a better preconditioner when the normal equations are ill-conditioned. These ideas have been implemented successfully, and the numerical results show that the algorithms enhance the performance of preconditioned conjugate gradient-based interior point methods.
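
    The following minimal sketch (not the authors' code) illustrates the kind of perturbed normal-equation solve described above: a small diagonal shift keeps the system amenable to preconditioned conjugate gradients near the optimum. The shift size and the simple Jacobi preconditioner, standing in for the paper's modified Cholesky factorization, are illustrative assumptions.

    ```python
    # Sketch: modified IPM search direction from perturbed normal equations.
    # Assumptions: dense data, a diagonal shift delta, and a Jacobi preconditioner
    # in place of the modified Cholesky factorization used in the paper.
    import numpy as np
    from scipy.sparse.linalg import cg, LinearOperator

    def modified_direction(A, d, r, delta=1e-8):
        """Solve (A diag(d) A^T + delta*I) dy = r with preconditioned CG."""
        M = A @ np.diag(d) @ A.T + delta * np.eye(A.shape[0])  # perturbed normal equations
        dinv = 1.0 / M.diagonal()                              # Jacobi preconditioner
        prec = LinearOperator(M.shape, matvec=lambda v: dinv * v)
        dy, info = cg(M, r, M=prec, maxiter=500)
        if info != 0:
            raise RuntimeError("PCG did not converge")
        return dy
    ```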

    Adaptive Use of Iterative Methods in Predictor-Corrector Interior Point Methods for Linear Programming

    In this work we devise efficient algorithms for finding the search directions for interior point methods applied to linear programming problems. There are two innovations. The first is the updating of preconditioners computed for previous barrier parameters. The second is an adaptive, automated procedure for determining whether to use a direct or iterative solver, whether to reinitialize or update the preconditioner, and how many updates to apply. These decisions are based on predictions of the cost of using the different solvers to determine the next search direction, given the costs of determining earlier directions. We summarize earlier results using a modified version of the OB1-R code of Lustig, Marsten, and Shanno, and we present results from the predictor-corrector code PCx modified to use adaptive iteration. If a direct method is appropriate for the problem, then our procedure chooses it, but when an iterative procedure is helpful, substantial gains in efficiency can be obtained. (Also cross-referenced as UMIACS-TR-99-21.)
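
    As a rough illustration of the adaptive choice described above, the sketch below compares predicted costs for a direct factorization against an iterative solve with an updated or reinitialized preconditioner. The cost model and names are assumptions made for illustration, not the OB1-R or PCx implementation.

    ```python
    # Sketch: pick the predicted-cheapest way to compute the next search direction,
    # using timings observed while computing earlier directions. The three options
    # and the linear cost model are illustrative assumptions.
    def choose_solver(direct_time, iter_time_per_matvec, predicted_iterations,
                      update_cost, reinit_cost):
        """Return the strategy with the lowest predicted cost."""
        options = {
            "direct": direct_time,                                              # refactor and solve directly
            "iterative+update": update_cost + iter_time_per_matvec * predicted_iterations,
            "iterative+reinit": reinit_cost + iter_time_per_matvec * predicted_iterations,
        }
        return min(options, key=options.get)
    ```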

    Adaptive Use of Iterative Methods in Interior Point Methods for Linear Programming

    In this work we devise efficient algorithms for finding the search directions for interior point methods applied to linear programming problems. There are two innovations. The first is the updating of preconditioners computed for previous barrier parameters. The second is an adaptive, automated procedure for determining whether to use a direct or iterative solver, whether to reinitialize or update the preconditioner, and how many updates to apply. These decisions are based on predictions of the cost of using the different solvers to determine the next search direction, given the costs of determining earlier directions. These ideas are tested by applying a modified version of the OB1-R code of Lustig, Marsten, and Shanno to a variety of problems from the NETLIB and other collections. If a direct method is appropriate for the problem, then our procedure chooses it, but when an iterative procedure is helpful, substantial gains in efficiency can be obtained. (Also cross-referenced as UMIACS-TR-95-111.)
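
    The preconditioner-updating idea can be pictured as lazily reusing a factorization computed for an earlier barrier parameter until it degrades. The sketch below, with assumed names and a complete LU factorization standing in for the actual preconditioner, illustrates the reuse-then-reinitialize pattern; it is not the modified OB1-R code.

    ```python
    # Sketch: solve a sequence of normal-equation systems, reusing the previous
    # preconditioner and rebuilding it only after PCG becomes expensive or fails.
    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import cg, splu, LinearOperator

    def solve_sequence(systems, rhs_list, max_cg_iters=50):
        """Return solutions for each (M, r) pair, refreshing the preconditioner lazily."""
        lu, solutions = None, []
        for M, r in zip(systems, rhs_list):
            if lu is None:
                lu = splu(csc_matrix(M))               # (re)initialize preconditioner
            prec = LinearOperator(M.shape, matvec=lu.solve)
            iters = 0
            def count(xk):
                nonlocal iters
                iters += 1
            x, info = cg(M, r, M=prec, maxiter=500, callback=count)
            solutions.append(x)
            if info != 0 or iters > max_cg_iters:      # stale preconditioner: rebuild next time
                lu = None
        return solutions
    ```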

    ConDistFL: Conditional Distillation for Federated Learning from Partially Annotated Data

    Developing a generalized segmentation model capable of simultaneously delineating multiple organs and diseases is highly desirable. Federated learning (FL) is a key technology enabling the collaborative development of a model without exchanging training data. However, limited access to fully annotated training data poses a major challenge to training generalizable models. We propose "ConDistFL", a framework that addresses this problem by combining FL with knowledge distillation. With an adequately designed conditional probability representation, local models can extract knowledge of unlabeled organs and tumors from the global model using partially annotated data. We validate our framework on four distinct partially annotated abdominal CT datasets from the MSD and KiTS19 challenges. The experimental results show that the proposed framework significantly outperforms the FedAvg and FedOpt baselines. Moreover, the performance on an external test dataset demonstrates superior generalizability compared to models trained on each dataset separately. Our ablation study suggests that ConDistFL can perform well without frequent aggregation, reducing the communication cost of FL. Our implementation will be available at https://github.com/NVIDIA/NVFlare/tree/dev/research/condist-fl.
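
    A minimal sketch of the distillation term described above, under the assumption that the conditional representation amounts to re-normalizing softened class probabilities over the locally unlabeled classes; it illustrates the idea only and is not the released ConDistFL implementation.

    ```python
    # Sketch: the global (teacher) model's soft predictions supervise the local
    # (student) model on classes that are unlabeled locally, via a KL term over
    # those classes only. The conditional re-normalization is an assumption.
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def conditional_distillation_loss(student_logits, teacher_logits,
                                      unlabeled_classes, temperature=1.0):
        """KL(teacher || student) restricted to the locally unlabeled classes."""
        p_t = softmax(teacher_logits / temperature)[..., unlabeled_classes]
        p_s = softmax(student_logits / temperature)[..., unlabeled_classes]
        p_t = p_t / p_t.sum(axis=-1, keepdims=True)   # condition on "one of the unlabeled classes"
        p_s = p_s / p_s.sum(axis=-1, keepdims=True)
        kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
        return float(np.mean(kl))
    ```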

    A Parallel Scalable PETSc-Based Jacobi-Davidson Polynomial Eigensolver with Application in Quantum Dot Simulation

    The Jacobi-Davidson (JD) algorithm has recently gained popularity for finding a few selected interior eigenvalues of large sparse polynomial eigenvalue problems, which commonly appear in many PDE-based computational science and engineering applications. As with other inner-outer algorithms, such as Newton-type methods, the bottleneck of the JD algorithm is the approximate solution of the inner correction equation. In previous work [Hwang, Wei, Huang, and Wang, A Parallel Additive Schwarz Preconditioned Jacobi-Davidson (ASPJD) Algorithm for Polynomial Eigenvalue Problems in Quantum Dot (QD) Simulation, Journal of Computational Physics (2010)], the authors proposed a parallel restricted additive Schwarz preconditioner in conjunction with a parallel Krylov subspace method to accelerate the convergence of the JD algorithm. Based on previous computational experience with algorithmic parameter tuning for the ASPJD algorithm, we further investigate the parallel performance of a PETSc-based ASPJD eigensolver on the Blue Gene/P; a QD quintic eigenvalue problem is used as an example to demonstrate its scalability, showing excellent strong scaling up to 2,048 cores.
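
    To illustrate the inner-outer structure the abstract refers to, the sketch below runs a basic Jacobi-Davidson iteration on a standard symmetric eigenproblem in NumPy. The ASPJD solver instead targets polynomial eigenvalue problems and solves the correction equation with a parallel, additive Schwarz preconditioned Krylov method, which is not reproduced here.

    ```python
    # Sketch: Jacobi-Davidson for a symmetric matrix A. The outer loop builds a
    # search subspace from Ritz pairs; the inner step approximately solves the
    # projected correction equation (here with a dense least-squares solve).
    import numpy as np

    def jacobi_davidson(A, n_iters=30, tol=1e-8):
        n = A.shape[0]
        V = np.random.default_rng(0).standard_normal((n, 1))
        V /= np.linalg.norm(V)
        for _ in range(n_iters):
            H = V.T @ A @ V                          # projected (outer) problem
            theta, s = np.linalg.eigh(H)
            theta, s = theta[0], s[:, 0]             # target the smallest Ritz value
            u = V @ s
            r = A @ u - theta * u                    # residual of the Ritz pair
            if np.linalg.norm(r) < tol:
                break
            P = np.eye(n) - np.outer(u, u)           # project out the current Ritz vector
            t = np.linalg.lstsq(P @ (A - theta * np.eye(n)) @ P, -r, rcond=None)[0]  # inner correction equation
            t -= V @ (V.T @ t)                       # orthogonalize against the search space
            t /= np.linalg.norm(t)
            V = np.hstack([V, t.reshape(-1, 1)])     # expand the subspace
        return theta, u
    ```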

    Automated Pancreas Segmentation Using Multi-institutional Collaborative Deep Learning

    The performance of deep learning-based methods strongly relies on the amount of data used for training. Many efforts have been made to increase the data available in the medical image analysis field. However, unlike photographic images, it is hard to build centralized databases of medical images because of numerous technical, legal, and privacy issues. In this work, we study the use of federated learning between two institutions in a real-world setting to collaboratively train a model without sharing raw data across national boundaries. We quantitatively compare the segmentation models obtained with federated learning and with local training alone. Our experimental results show that federated learning models have higher generalizability than those from standalone training. Comment: Accepted by MICCAI DCL Workshop 202
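
    The federated setting described above exchanges only model parameters between institutions. A minimal sketch of a FedAvg-style aggregation step is shown below; the names and size-proportional weighting are illustrative assumptions, not the code used in the study.

    ```python
    # Sketch: average per-layer model weights across clients, weighting each
    # client by its local dataset size. Raw images never leave the institutions.
    import numpy as np

    def federated_average(client_weights, client_sizes):
        """client_weights: list (per client) of lists of per-layer arrays."""
        total = float(sum(client_sizes))
        averaged = []
        for layers in zip(*client_weights):           # iterate layer-by-layer across clients
            averaged.append(sum(w * (n / total) for w, n in zip(layers, client_sizes)))
        return averaged
    ```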

    Kidney outcomes using a sustained ≥40% decline in eGFR: A meta-analysis of SGLT2 inhibitor trials

    Background: A recent meta-analysis of sodium–glucose cotransporter 2 (SGLT2) inhibitor outcome trials reported that SGLT2 inhibitors were associated with a reduced risk of adverse composite kidney outcomes, with moderate heterogeneity across the trials; however, the endpoints were defined differently across the trials. Hypothesis: The apparent heterogeneity of the meta-analysis of kidney composite outcomes of SGLT2 inhibitor trials will be substantially reduced by using a consistent assessment of sustained ≥40% decline in eGFR/chronic kidney dialysis/transplantation/renal death across trials. Methods: We performed a meta-analysis of kidney composite outcomes from the four SGLT2 inhibitor cardiovascular outcome trial programs conducted in general type 2 diabetes mellitus populations, which included, as a surrogate of progression to kidney failure, a sustained ≥40% decline in eGFR along with kidney replacement therapy and kidney death. The trials assessed were VERTIS CV (NCT01986881), the CANVAS Program (NCT01032629 and NCT01989754), DECLARE-TIMI 58 (NCT01730534), and EMPA-REG OUTCOME (NCT01131676). Results: Data from the trials comprised 42,516 individual participants; overall, 998 composite kidney events occurred. SGLT2 inhibition was associated with a significant reduction in the kidney composite endpoint (HR 0.58 [95% CI 0.51–0.65]) and with a highly consistent effect across the trials (Q statistic p = 0.64; I² = 0.0%). Conclusions: Our meta-analysis highlights the value of using similarly defined endpoints across trials and supports the finding of consistent protection against kidney disease progression with SGLT2 inhibitors as a class in patients with type 2 diabetes mellitus who either have established atherosclerotic cardiovascular disease or are at high cardiovascular risk with multiple cardiovascular risk factors.
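
    For readers unfamiliar with the heterogeneity statistics quoted above, the sketch below shows a standard inverse-variance pooling of per-trial log hazard ratios together with Cochran's Q and I². The inputs are placeholders, and the published analysis may have used different methods.

    ```python
    # Sketch: fixed-effect (inverse-variance) pooling of per-trial log hazard
    # ratios, with Cochran's Q and the I^2 heterogeneity statistic.
    import numpy as np

    def pool_hazard_ratios(log_hrs, std_errs):
        """Return pooled HR, its 95% CI, Cochran's Q, and I^2 (in percent)."""
        log_hrs, std_errs = np.asarray(log_hrs), np.asarray(std_errs)
        w = 1.0 / std_errs**2                          # inverse-variance weights
        pooled = np.sum(w * log_hrs) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))
        q = np.sum(w * (log_hrs - pooled) ** 2)        # Cochran's Q
        i2 = max(0.0, (q - (len(log_hrs) - 1)) / q) * 100 if q > 0 else 0.0
        ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
        return np.exp(pooled), ci, q, i2
    ```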